
    Planets: Integrated Services for Digital Preservation

    The Planets Project is developing services and technology to address core challenges in digital preservation. This article introduces the motivation for this work, describes the extensible technical architecture, and places the Planets approach in the context of the Open Archival Information System (OAIS) Reference Model. It also provides a scenario demonstrating Planets' usefulness in solving real-life digital preservation problems and an overview of the project's progress to date.

    Interoperable infrastructures for digital research: a proposed pathway for enabling transformation

    Governments, research organisations, cultural institutions, and commercial entities have invested substantial funds creating digital assets to enable new research in the arts and humanities. These assets have grown to include millions of items and petabytes of material covering all forms of content: manuscripts, monographs, maps, images, sound, and more. Unfortunately, scholars have been unable to fully exploit these digital assets. The supporting infrastructures are restrictive, the assets are distributed unevenly across organisations and systems, and access restrictions unpredictably limit where, how, and by whom items can be used. This poster outlines a pathway to remedy this unacceptable state of affairs. It explores the need for a simple-to-use infrastructure for digital scholarship. We argue that such an interoperable infrastructure, built primarily from off-the-shelf technologies and services, should, as far as possible, work like something the user already knows: it should allow researchers to bring their own content, tools, and creativity to a familiar environment. Where we envisage it differing from a local PC setup is in hosting digital content that is otherwise difficult to obtain or too big to download, offering the computational capacity required to quickly analyse big data using automated processes, and providing network services capable of robustly supporting digitally driven research. A key context for this poster is research infrastructure development around cloud, virtual, and remote workflows, notably the ongoing cyber-infrastructure work at the HathiTrust Research Centre [1] and the deployed cloud research infrastructure used by the European Bioinformatics Institute [2]. Whilst these observations and experiences point to a potentially crucial role for infrastructure in humanities research, we remain mindful of the robust critiques of recent digital humanities infrastructure projects [3]. These critiques have highlighted how infrastructure development should not make strong assumptions about how researchers work, what tools they need, the sorts of problems that they will strive to solve, or even the specialised standards that they will employ. Our proposed pathway avoids these known problems by insisting that researchers be able to bring their own tools, work in whatever way they want, use any workflow, and address any sort of problem. We envisage this being achieved by infrastructure development that works with many digital content providers, supports a wide range of content types, and is embedded within arts and humanities research that uses a variety of data-driven methodologies. It would support growth in big data research in the arts and humanities using researcher-appropriate standards and guidelines.
    The informal, conversational setting of a poster session offers a valuable opportunity to revisit the key questions and problems around digital research infrastructure, including: What are the benefits of scholars being able to use off-the-shelf technologies to work with big data across major content holders? How can these infrastructures enable transformative research? Do hybrid cloud infrastructures provide a sustainable approach to service provision? Such infrastructure could establish the foundation for scholarly work with large-scale content collections for years to come, enabling in turn transformative research that uncovers the value hidden in these digital assets and allowing society to benefit from its investment. Such transformation requires leading-edge researchers, and eventually the majority of researchers, to adopt, learn, and use new methods and techniques: not just answering old questions in new ways, but arriving at new answers and asking entirely new questions as a consequence. This proposed infrastructure pathway aims to explore the next steps towards making this transformation a reality.
    This poster builds on experience providing researchers with digital content. Scholars increasingly demand scalable access to large quantities of digital content (big data) that they can analyse using their own software and tools. Early on, the amounts of digital data were small; it was possible to provide copies or enable network downloads. With the growing volumes of big data, this is no longer plausible. Instead of moving hundreds of terabytes of data to researchers, we must allow researchers to bring their tools to the data. This is consistent with changes in the broader IT landscape. We have established five principles to guide our pathway:
    1. Keep it simple. Any new infrastructure should be simple to use and understand.
    2. Lower the bar. Any new infrastructure should not expose or require users to understand new or complex technologies or processes. It should, as much as possible, work like something they already do.
    3. Bring your own tools. Users should be able to employ the tools that they already understand and work with. For example, if a researcher uses Mathematica for image analysis in her office, she should be able to use it on large collections of digital assets distributed across multiple content organisations.
    4. Be creative. Users should be able to use data in creative, novel, unexpected ways. Many systems and infrastructures limit what users can do.
    5. Start small and grow big. Users should be able to try things out; explore, experiment, and debug; and then deploy on large content sets.
    References
    1. Beth Plale, Opportunities and Challenges of Text Mining HathiTrust Digital Library, Koninklijke Bibliotheek, 15 November 2013. www.hathitrust.org/documents/kb-plalehtrc-nov2013.pdf
    2. Creating a Global Alliance to Enable Responsible Sharing of Genomic and Clinical Data, 3 June 2013. www.ebi.ac.uk/sites/ebi.ac.uk/files/shared/images/News/Global_Alliance_White_Paper_3_June_2013.pdf
    3. Quinn Dombrowski, What ever happened to Project Bamboo?, DH201

    Infinite-randomness critical point in the two-dimensional disordered contact process

    We study the nonequilibrium phase transition in the two-dimensional contact process on a randomly diluted lattice by means of large-scale Monte Carlo simulations for times up to 10^10 and system sizes up to 8000 × 8000 sites. Our data provide strong evidence for the transition being controlled by an exotic infinite-randomness critical point with activated (exponential) dynamical scaling. We calculate the critical exponents of the transition and find them to be universal, i.e., independent of disorder strength. The Griffiths region between the clean and the dirty critical points exhibits power-law dynamical scaling with continuously varying exponents. We discuss the generality of our findings and relate them to a broader theory of rare region effects at phase transitions with quenched disorder. Our results are of importance beyond absorbing state transitions because, according to a strong-disorder renormalization group analysis, our transition belongs to the universality class of the two-dimensional random transverse-field Ising model. Comment: 13 pages, 12 eps figures, final version as published.
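The model in this abstract can be made concrete with a toy simulation. The sketch below is not the authors' large-scale code; it is a minimal sequential-update version of the contact process on a diluted square lattice, with the lattice size, dilution fraction, infection rate, and update scheme all chosen for illustration. Active sites recover at rate 1 and attempt to infect a random nearest neighbour at rate λ; diluted sites can never become active (quenched disorder).

```python
import random

def simulate_contact_process(L, lam, dilution, steps, seed=0):
    """Toy 2D contact process on a randomly diluted L x L lattice.

    Each step picks a random active site; with probability 1/(1+lam) it
    recovers (becomes inactive), otherwise it tries to activate a random
    nearest neighbour. Removed (diluted) sites never activate.
    Returns the final density of active sites.
    """
    rng = random.Random(seed)
    # Quenched disorder: a fraction `dilution` of sites is removed.
    allowed = [[rng.random() >= dilution for _ in range(L)] for _ in range(L)]
    # Start from the fully active state on all allowed sites.
    active = {(x, y) for x in range(L) for y in range(L) if allowed[x][y]}
    for _ in range(steps):
        if not active:
            break  # absorbing state reached: nothing can ever reactivate
        x, y = rng.choice(sorted(active))
        if rng.random() < 1.0 / (1.0 + lam):
            active.discard((x, y))  # recovery event
        else:
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            nx, ny = (x + dx) % L, (y + dy) % L  # periodic boundaries
            if allowed[nx][ny]:
                active.add((nx, ny))  # infection of a neighbour
    return len(active) / (L * L)
```

With λ = 0 every event is a recovery, so the system always falls into the absorbing (all-inactive) state; studying how the survival time and density decay near the critical λ is what requires the paper's far larger times and system sizes.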

    Can Lightning Produce Significant Levels of Mass-Independent Oxygen Isotopic Fractionation in Nebular Dust?

    Based on recent evidence that oxide grains condensed from a plasma will contain oxygen that is mass independently fractionated compared to the initial composition of the vapor, we present a first attempt to evaluate the potential magnitude of this effect on dust in the primitive solar nebula. This assessment relies on previous studies of nebular lightning to provide reasonable ranges of physical parameters for a very simple model that evaluates the plausibility that lightning could affect a significant fraction of nebular dust, and that such effects could cause a significant change in the oxygen isotopic composition of solids in the solar nebula over time. If only a small fraction of the accretion energy is dissipated as lightning over the volume of the inner solar nebula, then a large fraction of nebular dust will be exposed to lightning. If the temperature of such bolts is a few percent of the temperatures measured in terrestrial discharges, then dust will vaporize and recondense in an ionized environment. Finally, if only a small average decrease is assumed in the O-16 content of freshly condensed dust, then over the last 5 million years of nebular accretion the average delta O-17 of the dust could increase by more than 30 per mil. We conclude that it is possible that the "slope 1" oxygen isotope line measured in meteorites and their components represents a time-evolution sequence of nebular dust over the last several million years of nebular evolution: O-16-rich materials formed first, then escaped further processing as the average isotopic composition of the dust gradually became increasingly depleted in O-16.
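The back-of-the-envelope arithmetic behind the ~30 per mil figure can be sketched as a simple linear accumulation model. The rate and per-event shift below are purely illustrative placeholders, not values taken from the paper; the point is only that a small shift per reprocessing event, repeated over millions of years, compounds into a large population-average change.

```python
def mean_delta17O_increase(shift_per_event_permil, events_per_grain_per_year, years):
    """Average delta-17O increase of a dust population, assuming each
    lightning vaporize-recondense event adds a fixed mass-independent
    shift and events accumulate linearly over the nebula's lifetime.
    All parameter values are illustrative assumptions."""
    return shift_per_event_permil * events_per_grain_per_year * years

# Illustrative numbers: a 0.6 per-mil shift per event and one event per
# grain every 100,000 years give a 30 per-mil increase over 5 Myr.
increase = mean_delta17O_increase(0.6, 1e-5, 5e6)
```

Any combination of shift and event rate whose product over 5 Myr reaches ~30 per mil is consistent with the scenario; the paper's contribution is arguing that plausible lightning energetics make such combinations achievable.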

    Implementing Metadata that Guide Digital Preservation Services

    Effective digital preservation depends on a set of preservation services that work together to ensure that digital objects can be preserved for the long term. These services need digital preservation metadata, in particular, descriptions of the properties that digital objects may have and descriptions of the requirements that guide digital preservation services. This paper analyzes how these services interact and use these metadata, and develops a data dictionary to support them.

    Modelling Organizational Preservation Goals to Guide Digital Preservation

    This paper is an extended and updated version of the work reported at iPres 2008. Digital preservation activities can only succeed if they go beyond the technical properties of digital objects. They must consider the strategy, policy, goals, and constraints of the institution that undertakes them and take into account the cultural and institutional framework in which data, documents and records are preserved. Furthermore, because organizations differ in many ways, a one-size-fits-all approach cannot be appropriate. Fortunately, organizations involved in digital preservation have created documents describing their policies, strategies, workflows, plans, and goals to provide guidance. They also have skilled staff who are aware of sometimes unwritten considerations. Within Planets (Farquhar & Hockx-Yu, 2007), a four-year project co-funded by the European Union to address core digital preservation challenges, we have analyzed preservation guiding documents and interviewed staff from libraries, archives, and data centres that are actively engaged in digital preservation. This paper introduces a conceptual model for expressing the core concepts and requirements that appear in preservation guiding documents. It defines a specific vocabulary that institutions can reuse for expressing their own policies and strategies. In addition to providing a conceptual framework, the model and vocabulary support automated preservation planning tools through an XML representation.
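The abstract mentions an XML representation but does not reproduce it. As a purely hypothetical sketch of what a machine-readable preservation goal of this kind might look like (the element names below are invented for illustration and are not the Planets vocabulary itself):

```xml
<!-- Hypothetical example only: element names are invented for
     illustration, not taken from the Planets vocabulary. -->
<preservationGoal id="goal-001">
  <description>Rendered page layout must be preserved across migrations</description>
  <property>pageLayout</property>
  <constraint>identical</constraint>
  <appliesTo>application/pdf</appliesTo>
</preservationGoal>
```

The value of such a representation is that an automated planning tool can match each goal against the measured properties of candidate migration outputs rather than relying on prose policy documents.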

    CD81 and claudin 1 coreceptor association: role in hepatitis C virus entry.

    Hepatitis C virus (HCV) is an enveloped positive-stranded RNA hepatotropic virus. HCV pseudoparticles infect liver-derived cells, supporting a model in which liver-specific molecules define HCV internalization. Three host cell molecules have been reported to be important entry factors or receptors for HCV internalization: scavenger receptor BI, the tetraspanin CD81, and the tight junction protein claudin-1 (CLDN1). None of these receptors is uniquely expressed within the liver, leading us to hypothesize that their organization within hepatocytes may explain receptor activity. Since CD81 and CLDN1 act as coreceptors during late stages in the entry process, we investigated their association in a variety of cell lines and human liver tissue. Imaging techniques that take advantage of fluorescence resonance energy transfer (FRET) to study protein-protein interactions have been developed. Aequorea coerulescens green fluorescent protein- and Discosoma sp. red-monomer fluorescent protein-tagged forms of CD81 and CLDN1 colocalized, and FRET occurred between the tagged coreceptors at comparable frequencies in permissive and nonpermissive cells, consistent with the formation of coreceptor complexes. FRET also occurred between antibodies specific for CD81 and CLDN1 bound to human liver tissue, suggesting the presence of coreceptor complexes in liver tissue. HCV infection and treatment of Huh-7.5 cells with recombinant HCV E1-E2 glycoproteins and anti-CD81 monoclonal antibody modulated homotypic (CD81-CD81) and heterotypic (CD81-CLDN1) coreceptor protein associations at specific cellular locations, suggesting distinct roles in the viral entry process.

    αβ T cell receptor recognition of self-phosphatidylinositol presented by CD1b

    CD1 glycoproteins present lipid-based antigens to T-cell receptors (TCRs). A role for CD1b in T-cell-mediated autoreactivity was proposed when it was established that CD1b can present self-phospholipids with short alkyl chains (∼C34) to T cells; however, the structural characteristics of this presentation and recognition are unclear. Here, we report the 1.9 Å resolution binary crystal structure of CD1b presenting a self-phosphatidylinositol-C34:1 and an endogenous scaffold lipid. Moreover, we also determined the 2.4 Å structure of CD1b-phosphatidylinositol complexed to an autoreactive αβ TCR, BC8B. We show that the TCR docks above CD1b and directly contacts the presented antigen, selecting for both the phosphoinositol headgroup and glycerol neck region via antigen remodeling within CD1b and allowing lateral escape of the inositol moiety through a channel formed by the TCR α-chain. Furthermore, through alanine scanning mutagenesis and surface plasmon resonance, we identified key CD1b residues mediating this interaction, with Glu-80 abolishing TCR binding. In addition, we define a role for both the CD1b α1 and CD1b α2 molecular domains in modulating this interaction. These findings suggest that the BC8B TCR contacts both the presented phospholipid and the endogenous scaffold lipid via a dual mechanism of corecognition. Taken together, these data expand our understanding of the molecular mechanisms of CD1b-mediated T-cell autoreactivity.